Until 2022, writing had mainly been a job for humans; computers merely helped process what humans had written. Things changed completely when ChatGPT, a generative AI tool, was introduced. More generative AI tools followed, among which the most popular in China include DeepSeek, Doubao, Qianwen, Kimi, and Wenxinyiyan. Anyone can now ask such a tool to write a complete article from just a few words of prompting. Prompts are instructions you give to a generative AI tool, much like commands typed into a command-line interface or the requirements set out in the writing section of an exam paper. In this way, anyone can be a writer, as long as they have a computer with Internet access and a generative AI app.
If those tools were used just for fun, everything would be fine. However, more and more people are using generative AI in blogs and forums, flooding the Internet with a huge amount of rubbish. I call such AI-generated text rubbish because it contains plenty of false information. In other words, AI lies to you. Researchers call this phenomenon hallucination. In my opinion, it's just an excuse for AI's inability to always tell the truth, which computer software has long been required and expected to do. Given an article produced by AI, I'd ask how sure the author is that all the information in it is correct. Generative AI models are trained on data that has been collected and processed, and it is the trainers' responsibility to verify the genuineness of those materials. Fed with bad data, a model will probably fabricate evidence and facts in the passages it writes. For example, if the model were told that Trump had lost the 2024 election, it would generate the false statement that Joe Biden was president of the US in 2025. Another source of false information is the combination of unrelated things. Suppose Jack went home at 6 p.m. yesterday and Jane went shopping at 7 p.m. the day before yesterday. Both facts are true, but AI might produce a wrong statement based on them: Jack went home at 6 p.m. yesterday and went shopping at 7 p.m. the day before that. AI can do almost anything, or so many AI people have boasted to me.
AI usually generates articles in relatively fixed forms and may use obscure words; that is, its writing is usually boring and sometimes hard to understand. Unlike human writers, AI loves to list things. The lists seem logical, but given the presence of fake information, reading them is a waste of time. Interestingly, the cheaper the tool, the worse the form. Advanced generative AI may produce articles with more flexible structures, but most AI-produced content on the Internet is made with cheap or even free models. Imagine how annoying it is when you search for something online and are served piles of AI-generated material: repetitive lists of potentially wrong statements. I don't doubt AI's power to learn and use language materials. Indeed, if we trained a model on the Oxford English Dictionary or something similar, it would write articles with perfect grammar. But such an article could be a challenge even for an expert in English literature. Again, reading such incomprehensible stuff is a waste of time.
AI-generated text is unfair to both conventional writers and human readers. With generative AI, anyone can produce an article of thousands of words in five minutes, no matter how good they are at writing. A conventional human writer might take hours to complete the same task; for someone who rarely writes, it could take days! Technology is a force of production, as many writers would agree, but it is not pleasant for them to see more and more AI-produced books being published. Modern writers can turn out several new books a day with AI, while conventional writers spend far longer finishing a single book. Articles and books exist for humans to read. If you need ten minutes to read and understand something that was created in thirty seconds, wouldn't you feel that your intelligence is quite limited? If, after reading, you find that almost everything was sheer nonsense, wouldn't you feel angry at having wasted part of your life? And if that is what you experience nearly every time you search the Internet, wouldn't you complain about the abuse of generative AI? Although human proofreading can eliminate false information from AI-produced text, people who love using generative tools tend to skip it. As for why they love those tools, the reason is probably the efficiency of producing vast quantities of seemingly useful content to attract readers and make more money.
Unchecked AI-generated content has polluted the Internet. Meanwhile, an increasing number of new models are being trained on these contaminated online resources. Feeding a rubbish machine with rubbish won't make it better; on the contrary, more untruths may be propagated throughout cyberspace.
However, compared with the damage AI does to the credibility of the Internet, it is far worse that AI may make human beings less creative and less diligent. Research has shown that the abuse of generative AI has harmful effects on our brains [1]. Users of LLMs (Large Language Models, a concrete form of generative AI) were reported to "consistently underperform at neural, linguistic and behavioral levels". The harmful implications of LLM reliance should concern us all. It is even more disturbing to think that AI is becoming smarter while humans without AI are growing duller. Some people worry that one day humans will be replaced by AI creatures; in fact, some of us have already been superseded by AI to some extent! Humans are born lazy, and AI worsens our laziness. No one can stop the steps of AI, but AI can stop the steps of humans.